Automatic summarising: factors and directions
This position paper argues that progress in automatic summarising demands
a better research methodology and a carefully focussed research strategy. To
develop effective procedures it is necessary to identify and respond
to the context factors, i.e. the input, purpose, and output factors, that bear on
summarising and its evaluation. The paper analyses and illustrates these
factors and their implications for evaluation. It then argues that this
analysis, together with the state of the art and the intrinsic difficulty of
summarising, implies a nearer-term strategy concentrating on shallow, but not
surface, text analysis and on indicative summarising. This is illustrated with
current work, from which a potentially productive research programme can be
developed.
Towards better NLP system evaluation
This paper considers key elements of evaluation methodology, indicating the many points involved and advocating an unpacking approach when specifying an evaluation remit and design. Recognising the importance of both environment variables and system parameters leads to a grid organisation for tests. The paper illustrates the application of these notions through two examples.
UK national programmes in natural language research
Summary of the paper presented by the author, prepared by the SEPLN Editorial Committee
- …